
feat: atomworks encoding #335

Open

kierandidi wants to merge 5 commits into jflat06/sparse-dispatch from kdidi/atomworks_input
Conversation

@kierandidi (Collaborator) commented Feb 14, 2026

Adds tmol.io.pose_stack_from_atomworks(), enabling tmol to construct a PoseStack directly from AtomWorks unified atom encoding tensors.

What's included

  • tmol/io/pose_stack_from_atomworks.py — converts AtomWorks atom14/atom37 representations to tmol's internal coordinate format
  • tmol/tests/io/test_pose_stack_from_atomworks.py — comprehensive test suite covering single-chain, multi-chain, and batched inputs

Motivation
AtomWorks uses a unified atom encoding that differs from tmol's residue-based representation. This bridge allows ML models built on AtomWorks to use tmol's Rosetta energy function for loss computation and structure refinement without manual coordinate conversion.
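To illustrate the kind of conversion the bridge performs, here is a minimal sketch of flattening an atom37-style tensor (fixed 37 atom slots per residue, with a presence mask) into the dense per-atom coordinate layout a residue-based representation needs. Function name, shapes, and return values are assumptions for illustration, not tmol's or AtomWorks' actual API.

```python
import numpy as np

def flatten_atom37(coords: np.ndarray, mask: np.ndarray):
    """Hypothetical sketch, not tmol's real API.

    coords: [n_res, 37, 3] float array of atom37 coordinates.
    mask:   [n_res, 37] bool array, True where the atom slot is occupied.

    Returns the real-atom coordinates as a flat [n_atoms, 3] array, plus
    each residue's offset into that flat array.
    """
    assert coords.shape[:2] == mask.shape and coords.shape[2] == 3
    flat = coords[mask]                               # drop unoccupied slots
    counts = mask.sum(axis=1)                         # atoms per residue
    offsets = np.concatenate([[0], np.cumsum(counts)[:-1]])
    return flat, offsets

# Toy input: 2 residues, only the first 4 atom37 slots occupied.
coords = np.zeros((2, 37, 3))
mask = np.zeros((2, 37), dtype=bool)
mask[:, :4] = True
flat, offsets = flatten_atom37(coords, mask)
print(flat.shape, offsets.tolist())  # (8, 3) [0, 4]
```

The real conversion additionally has to map atom37 slot indices to residue-type-specific atom names, which is where the whitespace-aligned atom name tables mentioned below come in.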

@jflat06 (Collaborator) commented Feb 23, 2026

Did you ever make the pickled atomworks file for testing? Or are we going to settle for the code-generated one?

@fdimaio (Collaborator) left a comment
LGTM (pending flake/black errors)

- black reformatted 39 files
- pre-commit: changed black hook from language: python/python3.11
  to language: system (uses the active environment's black)
- .flake8: added per-file-ignores for pose_stack_from_atomworks.py
  (E201, E231, E241: intentional whitespace alignment in atom name tables)
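For reference, a per-file-ignores entry of the kind described above might look like the following sketch in .flake8 (exact path and codes taken from the comment; the surrounding section layout is an assumption):

```ini
# .flake8 — sketch; E201/E231/E241 allow the aligned whitespace
# in the atom name tables of this one file only.
[flake8]
per-file-ignores =
    tmol/io/pose_stack_from_atomworks.py: E201, E231, E241
```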

Made-with: Cursor
Comment thread on .pre-commit-config.yaml (outdated):

```diff
       entry: black
-      language: python
-      language_version: python3.11
+      language: system
```
Why is this setting changing?

I found the commit where you made this modification. If I understand correctly, this change will not use the version that pre-commit installs but rather the version that pip grabs, is that right? The danger here is that with an un-pinned version of black we'll end up with constant small cosmetic changes at every commit, and it'll make it hard to tell the important changes from the unimportant.


In fact, it looks like we ought to be pinning a particular version of black in pre-commit using the rev tag?

rev: 24.1.1
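A pinned black hook in .pre-commit-config.yaml would typically look like this (a sketch using the rev suggested above; this is the standard pre-commit layout, not a quote from this repo's config):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.1.1   # pinned so every contributor formats with the same black
    hooks:
      - id: black
```

With a pinned rev, pre-commit installs that exact black release into its own isolated environment, so formatting no longer depends on whichever black pip happens to resolve in the active environment.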

@kierandidi (Collaborator, Author) commented

@aleaverfay thanks for noting that! I've pinned the black version in pre-commit now, so it should be consistent.

codecov bot commented Feb 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.08%. Comparing base (831bcde) to head (9d99808).
⚠️ Report is 13 commits behind head on jflat06/sparse-dispatch.

Additional details and impacted files
@@                     Coverage Diff                     @@
##           jflat06/sparse-dispatch     #335      +/-   ##
===========================================================
+ Coverage                    95.03%   95.08%   +0.04%     
===========================================================
  Files                          300      302       +2     
  Lines                        23401    23626     +225     
===========================================================
+ Hits                         22239    22464     +225     
  Misses                        1162     1162              
Flag                               Coverage                    Δ
_shrug_Testing_CPU                 89.65% <99.59%>    (+0.09%) ⬆️
_shrug_Testing_CPU_debug_w_o_jit   91.75% <99.59%>    (+0.07%) ⬆️
_shrug_Testing_CUDA                92.96% <100.00%>   (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.
